Mixing Consistent Deep Clustering
Authors
Abstract
Finding well-defined clusters in data represents a fundamental challenge for many data-driven applications, and largely depends on good representations. Drawing on the representation learning literature, studies suggest that one key characteristic of good latent representations is the ability to produce semantically mixed outputs when decoding linear interpolations of two representations. We propose the Mixing Consistent Deep Clustering (MCDC) method, which encourages interpolations to appear realistic while adding the constraint that interpolated points must look like the inputs. By applying this training procedure to various clustering-specific and non-specific autoencoder models, we found that the proposed method systematically changed the structure of the learned representations and improved clustering performance; we tested ACAI, IDEC, and VAE models on the MNIST, SVHN, and CIFAR-10 datasets. These outcomes have practical implications for numerous real-world tasks, as the method can be added to existing autoencoders to further improve their performance.
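The core idea in the abstract — decoding a linear interpolation of two latent codes and requiring the result to resemble the matching mix of the inputs — can be sketched with a toy autoencoder. This is a minimal illustration under made-up names (`encode`, `decode`, `mixing_consistency_loss`) with random, untrained weights; it is a surrogate for the idea, not the paper's actual objective or code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy autoencoder: random linear encoder, nonlinear decoder.
# In MCDC-style training these would be learned neural networks.
E = rng.normal(size=(2, 4))   # encoder weights: 4-d input -> 2-d latent
D = rng.normal(size=(4, 2))   # decoder weights: 2-d latent -> 4-d output

def encode(x):
    return E @ x

def decode(z):
    # The nonlinearity makes mixing consistency non-trivial:
    # for a purely linear decoder the penalty below would vanish.
    return np.tanh(D @ z)

def mixing_consistency_loss(x1, x2, alpha):
    """Decode a linear interpolation of two latent codes and penalize
    its distance from the matching mix of the two reconstructions.
    (Illustrative surrogate for an interpolation-consistency term.)"""
    z_mix = alpha * encode(x1) + (1 - alpha) * encode(x2)
    x_mix = decode(z_mix)
    target = alpha * decode(encode(x1)) + (1 - alpha) * decode(encode(x2))
    return float(np.mean((x_mix - target) ** 2))

x1, x2 = rng.normal(size=4), rng.normal(size=4)
print(mixing_consistency_loss(x1, x2, 0.5))
```

In practice a term like this would be added to the autoencoder's reconstruction loss and minimized jointly, pushing decoded interpolations toward the data manifold.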
Similar Resources
Clustering Consistent Sparse Subspace Clustering
Subspace clustering is the problem of clustering data points into a union of low-dimensional linear/affine subspaces. It is the mathematical abstraction of many important problems in computer vision, image processing and machine learning. A line of recent work [4, 19, 24, 20] provided strong theoretical guarantee for sparse subspace clustering [4], the state-of-the-art algorithm for subspace clu...
Consistent k-Clustering
The study of online algorithms and competitive analysis provides a solid foundation for studying the quality of irrevocable decision making when the data arrives in an online manner. While in some scenarios the decisions are indeed irrevocable, there are many practical situations when changing a previous decision is not impossible, but simply expensive. In this work we formalize this notion and...
Clustering by mixing flows
We calculate the Lyapunov exponents for particles suspended in a random three-dimensional flow, concentrating on the limit where the viscous damping rate is small compared to the inverse correlation time. In this limit Lyapunov exponents are obtained as a power series in epsilon, a dimensionless measure of the particle inertia. Although the perturbation generates an asymptotic series, we obtain...
Better Mixing via Deep Representations
It has been hypothesized, and supported with experimental evidence, that deeper representations, when well trained, tend to do a better job at disentangling the underlying factors of variation. We study the following related conjecture: better representations, in the sense of better disentangling, can be exploited to produce Markov chains that mix faster between modes. Consequently, mixing betw...
Deep Sparse Subspace Clustering
In this paper, we present a deep extension of Sparse Subspace Clustering, termed Deep Sparse Subspace Clustering (DSSC). Regularized by the unit sphere distribution assumption for the learned deep features, DSSC can infer a new data affinity matrix by simultaneously satisfying the sparsity principle of SSC and the nonlinearity given by neural networks. One of the appealing advantages brought by...
Journal
Journal title: Lecture Notes in Computer Science
Year: 2022
ISSN: ['1611-3349', '0302-9743']
DOI: https://doi.org/10.1007/978-3-030-95467-3_10